
    Software Engineering and Petri Nets

    This booklet contains the proceedings of the Workshop on Software Engineering and Petri Nets (SEPN), held on June 26, 2000. The workshop was held in conjunction with the 21st International Conference on Application and Theory of Petri Nets (ICATPN-2000), organised by the CPN group of the Department of Computer Science, University of Aarhus, Denmark. The SEPN workshop papers are available in electronic form via the web page: http://www.daimi.au.dk/pn2000/proceeding

    PLCTOOLS: Graph Transformation Meets PLC Design

    This paper presents PLCTOOLS, a formal environment for designing and simulating programmable controllers. Control models are specified with IEC FBD (Function Block Diagram) and translated into functionally equivalent HLTPNs (High-Level Timed Petri Nets), through MetaEnv, for analysis and simulation; the obtained results are presented as animations of the FBD blocks. The peculiarity of FBD is that it does not come with a fixed set of syntactic elements; it allows users to add as many new blocks as they want. Consequently, each time users want to add a new FBD block with PLCTOOLS, they must provide not only the concrete syntax, to add it to the library of available blocks, but also the associated HLTPN, to allow MetaEnv to build the formal representation.
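To make the FBD-to-Petri-net idea concrete, here is a minimal sketch (not the PLCTOOLS or MetaEnv implementation) of a place/transition net fragment that could model a two-input AND function block. All names and the `PetriNet` class are illustrative assumptions; HLTPNs additionally carry data and timing information omitted here.

```python
class PetriNet:
    """Toy place/transition net: marking maps places to token counts."""

    def __init__(self):
        self.marking = {}          # place -> token count
        self.transitions = {}      # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)
        for p in inputs + outputs:
            self.marking.setdefault(p, 0)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] += 1

# An AND block: the output place gets a token only when both inputs hold one.
net = PetriNet()
net.add_transition("and_fires", inputs=["in_a", "in_b"], outputs=["out"])
net.marking["in_a"] = 1
assert not net.enabled("and_fires")   # one input alone is not enough
net.marking["in_b"] = 1
net.fire("and_fires")
assert net.marking["out"] == 1
```

A user-defined block would contribute such a net fragment to the library alongside its concrete syntax, so the tool can compose fragments into a full model.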

    Automatic Steering of Behavioral Model Inference

    Many testing and analysis techniques use finite state models to validate and verify the quality of software systems. Since the specification of such models is complex and time-consuming, researchers have defined several techniques to extract finite state models from code and traces. Automatically generating models requires much less effort than designing them, and thus eases the verification and validation of large software systems. However, when models are inferred automatically, the precision of the mining process is critical. Behavioral models mined with imprecise processes can include many spurious behaviors, and can thus compromise the results of testing and analysis techniques that use those models. In this paper, we increase the precision of automata inferred from execution traces by leveraging two learning techniques. We first mine execution traces to infer statistically significant temporal properties that capture relations between non-consecutive and possibly distant events. We then incrementally refine a simple initial automaton by merging likely equivalent states. We identify equivalent states by analyzing sets of consecutive events, and we use the inferred temporal properties to evaluate whether two equivalent states can be merged or not. We merge equivalent states only if the merging does not violate any temporal property, since a merging that violates temporal properties is likely to introduce an imprecise generalization. Our generalization process, which preserves temporal properties while merging states, avoids breaking non-local relations, and thus addresses one of the major causes of overgeneralized models. Thus, mined properties steer the learning of behavioral models. The technique is completely automated and generates an automaton that both accepts the input traces and satisfies the mined temporal properties.
    This work has been partially supported by the Europea
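The following sketch illustrates the general idea of property-steered merging (it is not the paper's algorithm): a prefix-tree automaton is built from traces, a precedence property is mined, and a candidate state merge is vetoed when it would let the automaton violate that property. Trace and event names are made up for the example.

```python
def build_prefix_tree(traces):
    """Automaton as dict: state -> {event: next_state}; state 0 is initial."""
    delta, nxt = {0: {}}, 1
    for trace in traces:
        s = 0
        for ev in trace:
            if ev not in delta[s]:
                delta[s][ev] = nxt
                delta[nxt] = {}
                nxt += 1
            s = delta[s][ev]
    return delta

def holds_precedence(traces, a, b):
    """Mined property 'a precedes b': b never occurs before the first a."""
    for trace in traces:
        seen_a = False
        for ev in trace:
            seen_a = seen_a or ev == a
            if ev == b and not seen_a:
                return False
    return True

def merge_states(delta, keep, drop):
    """Merge state `drop` into `keep`; outgoing edges are unioned, keep wins."""
    merged = {q: {ev: (keep if t == drop else t) for ev, t in edges.items()}
              for q, edges in delta.items() if q != drop}
    for ev, t in delta[drop].items():
        merged[keep].setdefault(ev, keep if t == drop else t)
    return merged

def can_violate_precedence(delta, a, b, depth=8):
    """Bounded check: can the automaton emit b before any a?"""
    frontier = [(0, False)]
    for _ in range(depth):
        nxt = []
        for q, seen_a in frontier:
            for ev, t in delta[q].items():
                if ev == b and not seen_a:
                    return True
                nxt.append((t, seen_a or ev == a))
        frontier = nxt
    return False

traces = [["open", "close"], ["open", "open", "close"]]
tree = build_prefix_tree(traces)
assert holds_precedence(traces, "open", "close")
assert not can_violate_precedence(tree, "open", "close")
# Merging the initial state with its 'open'-successor would allow "close"
# as a first event, so the mined property vetoes this merge.
merged = merge_states(tree, keep=0, drop=1)
assert can_violate_precedence(merged, "open", "close")
```

The real technique mines statistically significant properties over many traces and checks them exactly rather than with a bounded search; the sketch only shows how a mined property can block an overgeneralizing merge.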

    Formal Verification with Confidence Intervals to Establish Quality of Service Properties of Software Systems

    Formal verification is used to establish the compliance of software and hardware systems with important classes of requirements. System compliance with functional requirements is frequently analyzed using techniques such as model checking and theorem proving. In addition, a technique called quantitative verification supports the analysis of the reliability, performance, and other quality-of-service (QoS) properties of systems that exhibit stochastic behavior. In this paper, we extend the applicability of quantitative verification to the common scenario in which the probabilities of transition between some or all states of the Markov models analyzed by the technique are unknown, but observations of these transitions are available. To this end, we introduce a theoretical framework and a tool chain that establish confidence intervals for the QoS properties of a software system modelled as a Markov chain with uncertain transition probabilities. We use two case studies from different application domains to assess the effectiveness of the new quantitative verification technique. Our experiments show that disregarding the above source of uncertainty may significantly affect the accuracy of the verification results, leading to wrong decisions and low-quality software systems.
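A minimal sketch of the underlying idea, assuming a standard Wilson score interval rather than the paper's actual framework: transition probabilities estimated from observed counts get confidence intervals, which can then be propagated to a simple reachability property. The chain, counts, and state names are hypothetical.

```python
import math

def wilson_interval(k, n, z=1.96):
    """Wilson score ~95% CI for a transition probability seen k times in n trials."""
    phat = k / n
    denom = 1 + z * z / n
    centre = (phat + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(phat * (1 - phat) / n + z * z / (4 * n * n))
    return (max(0.0, centre - half), min(1.0, centre + half))

# Hypothetical two-step chain: request -> dispatched -> completed.
# P(completed) = p1 * p2 is monotone in both probabilities, so the
# endpoints of the product interval are the products of the endpoints.
lo1, hi1 = wilson_interval(k=92, n=100)   # 92 of 100 requests dispatched
lo2, hi2 = wilson_interval(k=45, n=50)    # 45 of 50 dispatches completed
print(f"P(completed) in [{lo1 * lo2:.3f}, {hi1 * hi2:.3f}]")
```

For general Markov chains the QoS property is not simply a product of transitions, which is why a dedicated framework and tool chain are needed; the sketch only conveys why ignoring estimation uncertainty can mislead a point-estimate verification.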

    A Technique for Verifying Component-Based Software

    Component-based software systems raise new problems for the testing community: the reuse of components suggests the possibility of reducing testing costs by reusing information about the quality of the software components. This paper addresses the problem of testing evolving software systems, i.e., systems obtained by modifying and/or substituting some of their components. The paper proposes a technique to automatically identify behavioral differences between different versions of the system, and to deduce possible problems from inconsistent behaviors. The approach is based on the automatic distilling of invariants from in-field executions. The computed invariants are used to monitor the behavior of new components and to reveal unexpected interactions. The events generated while monitoring system executions are presented to software engineers, who can infer possible problems in the new versions.
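A minimal sketch of range-invariant distilling, in the spirit of dynamic invariant detectors such as Daikon rather than the paper's actual implementation: record the minimum and maximum observed per variable during in-field executions, then flag values from a new component version that fall outside the learned range. The variable names are made up.

```python
class RangeInvariants:
    """Learn per-variable [min, max] invariants and report violations."""

    def __init__(self):
        self.bounds = {}   # variable -> (min, max) observed in-field

    def learn(self, observation):
        for var, value in observation.items():
            lo, hi = self.bounds.get(var, (value, value))
            self.bounds[var] = (min(lo, value), max(hi, value))

    def violations(self, observation):
        out = []
        for var, value in observation.items():
            if var in self.bounds:
                lo, hi = self.bounds[var]
                if not lo <= value <= hi:
                    out.append((var, value, (lo, hi)))
        return out

inv = RangeInvariants()
for obs in [{"queue_len": 0}, {"queue_len": 7}, {"queue_len": 3}]:
    inv.learn(obs)
# A new component version produces an out-of-range value: flag it for review.
assert inv.violations({"queue_len": 50}) == [("queue_len", 50, (0, 7))]
assert inv.violations({"queue_len": 5}) == []
```

Real invariant detectors distill far richer properties (linear relations, orderings, nullness), but the monitoring loop is the same: learned invariants become runtime checks on the new version.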

    Bidirectional symbolic analysis for effective branch testing

    Structural coverage metrics, and in particular branch coverage, are popular approaches to measure the thoroughness of test suites. Unfortunately, the presence of elements that are not executable in the program under test and the difficulty of generating test cases for rare conditions limit the effectiveness of the coverage obtained with current approaches. In this paper, we propose a new approach that combines symbolic execution and symbolic reachability analysis to improve the effectiveness of branch testing. Our approach embraces the ideal definition of branch coverage as the percentage of executable branches traversed by the test suite, and proposes a new bidirectional symbolic analysis for both testing rare execution conditions and eliminating infeasible branches from the set of test objectives. The approach is centered on a model of the analyzed execution space. The model identifies the frontier between symbolic execution and symbolic reachability analysis, to guide the alternation and the progress of bidirectional analysis towards the coverage targets. The experimental results presented in the paper indicate that the proposed approach can both find test inputs that exercise rare execution conditions not identified by state-of-the-art approaches and eliminate many infeasible branches from the coverage measurement. It can thus produce a modified branch coverage metric that indicates the percentage of feasible branches covered during testing, and help team leaders and developers estimate the number of not-yet-covered feasible branches. The approach proposed in this paper suffers less than other approaches from particular cases that may trap the analysis in unbounded loops.
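The modified metric itself is simple arithmetic once the analysis has classified branches. The sketch below (with illustrative branch identifiers, not the paper's tooling) shows how removing proven-infeasible branches from the denominator changes the reported score and yields the remaining to-cover set.

```python
def feasible_branch_coverage(all_branches, covered, infeasible):
    """Coverage over feasible branches only, plus the branches still to cover."""
    feasible = set(all_branches) - set(infeasible)
    covered_feasible = set(covered) & feasible
    remaining = feasible - covered_feasible
    return len(covered_feasible) / len(feasible), sorted(remaining)

branches   = ["b1", "b2", "b3", "b4", "b5"]
covered    = ["b1", "b3"]
infeasible = ["b5"]            # e.g., proven unreachable by reachability analysis

score, todo = feasible_branch_coverage(branches, covered, infeasible)
print(f"feasible branch coverage: {score:.0%}, still to cover: {todo}")
# Plain coverage would report 2/5 = 40%; the modified metric reports 2/4 = 50%.
```

The hard part, of course, is producing the `covered` and `infeasible` sets, which is what the bidirectional symbolic analysis contributes.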

    Improving UML with Petri nets

    This work has been partially supported by Ministero della Ricerca Scientifica e Tecnologica under the SALADIM Project and by Politecnico di Milano under the TATOOS Project.

    UML is the OMG standard notation for object-oriented modeling. It is easy, graphical, and appealing, but in several cases still too imprecise. UML is strong as a modeling means and supplies several different diagrammatic notations for representing the different aspects of a system under development, but it lacks simulation and verification capabilities. This drawback comes from its semi-formal nature: UML is extremely precise and wide-ranging if we consider syntactic aspects, but its semantics is only as precise as that of informal notations. Scientists and users, together with standardization efforts (UML 2.0), are trying to overcome this problem, but as a side effect they are also limiting the intrinsic flexibility of UML. Moreover, several formalization efforts have concentrated on its static elements (for example, inheritance), leaving the dynamic semantics almost untouched. In this paper we propose the pairing of UML dynamic models with high-level timed Petri nets (HLTPNs) to obtain a flexible and customizable means to reason about the dynamic aspects of object-oriented models, to simulate particular parts of these models, and, if necessary, to analyze them. The proposal exploits rules to ascribe formal semantics to the main UML elements, in terms of functionally equivalent HLTPNs, and to show results (from execution and analysis) as decorations of UML symbols. Besides sketching the approach, the paper also presents some experiences we have gained so far with it and a research agenda to identify other possible uses of the dual definition of the notation.

    Software Testing and Analysis: Process, Principles and Techniques

    The first comprehensive book on software test and analysis. You can't "test quality into" a software product, but neither can you build a quality software product without test and analysis. Software test and analysis is increasingly recognized, in research and in industrial practice, as a core challenge in software engineering and computer science. Software Testing and Analysis: Process, Principles, and Techniques is the first book to present a range of complementary software test and analysis techniques in an integrated, coherent fashion. It covers a full spectrum of topics from basic principles and underlying theory to organizational and process issues in real-world application. The emphasis throughout is on selecting a complementary set of practical techniques to achieve an acceptable level of quality at an acceptable cost. Highlights of the book include:
    - Interplay among technical and non-technical issues in crafting an approach to software quality, with chapters devoted to planning and monitoring the software quality process.
    - A selection of practical techniques ranging from inspection to automated program and design analyses to unit, integration, system, and regression testing, with technical material set in the context of real-world problems and constraints in software development.
    - A coherent view of the state of the art and practice, with technical and organizational approaches to push the state of practice toward the state of the art.
    Throughout, the text covers techniques that are suitable for near-term application, with sufficient technical background to help you know how and when to apply them. Exercises reinforce the instruction and ensure that you master each topic before proceeding. By incorporating software testing and analysis techniques into modern practice, Software Testing and Analysis: Process, Principles, and Techniques provides both students and professionals with realistic strategies for reliable and cost-effective software development.